
    High Throughput Virtual Screening with Data Level Parallelism in Multi-core Processors

    Improving the throughput of molecular docking, a computationally intensive phase of the virtual screening process, is a highly sought-after area of research, since it carries significant weight in the drug design process. With such improvements, the world might find cures for currently incurable diseases such as HIV/AIDS and cancer sooner. The approach presented in this paper utilizes a multi-core environment to introduce Data Level Parallelism (DLP) into AutoDock Vina, a widely used molecular docking package. AutoDock Vina already exploits Instruction Level Parallelism (ILP) in multi-core environments and is therefore optimized for them. However, our results clearly show that our approach enhances the throughput of this already optimized software by more than six times. Combined with the current shift in processor technology from multi-core to many-core, this will dramatically reduce the time consumed by the lead identification phase of drug design. We therefore believe that the contribution of this project will make it possible to expand the number of small molecules docked against a drug target, improving the chances of designing drugs for incurable diseases.
    Comment: Information and Automation for Sustainability (ICIAfS), 2012 IEEE 6th International Conference o
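    The data-level parallelism described above amounts to docking many independent ligands concurrently, one docking process per ligand, rather than relying only on the thread-level parallelism inside a single run. A minimal sketch, assuming a pool of worker processes; the `dock_ligand` function, the ligand filenames, and the fake affinity scores are illustrative placeholders, not the paper's actual implementation (a real run would invoke the `vina` binary via `subprocess`):

    ```python
    # Sketch of data-level parallelism for virtual screening: dock many ligands
    # concurrently, one independent worker per ligand. The docking itself is
    # simulated so the sketch runs anywhere without AutoDock Vina installed.
    from multiprocessing import Pool

    def dock_ligand(ligand):
        """Placeholder for one AutoDock Vina run (in practice something like
        subprocess.run(["vina", "--ligand", ligand, ...])). Returns a fake
        (ligand, binding_affinity) pair for illustration only."""
        return (ligand, -0.1 * len(ligand))  # fake affinity score

    if __name__ == "__main__":
        ligands = [f"ligand_{i}.pdbqt" for i in range(8)]  # hypothetical inputs
        with Pool(processes=4) as pool:  # one worker per available core
            results = pool.map(dock_ligand, ligands)
        # Rank hits by (fake) binding affinity, most negative first.
        results.sort(key=lambda r: r[1])
        print(results[0][0])
    ```

    Because each ligand-receptor docking is independent, throughput scales with the number of worker processes until the cores are saturated, which is the effect the abstract reports.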

    Classification of resources in an e-library using machine learning algorithms

    The library is the heart of a university, and students spend a large amount of time there in search of knowledge. The habit of reading printed materials such as books, journals and other research publications is gradually changing: since handling print is an inconvenient and time-consuming process, students are increasingly interested in soft materials such as e-journals, e-books and other web-based resources. Nowadays, most digital resources in a library are stored without any classification. They are neither categorized nor well utilized, since there is no proper way to access or find appropriate material when users' queries are applied. Although there are many manual ways to access text-based materials in a library, these cannot be applied to digital resources, which require some form of text mining and machine learning. This project addresses the issue through a closed-domain question answering system for a resource pool in an e-library. As the initial step, the project narrows the search space by processing the abstracts of the resources. More than 300 abstracts were extracted along with their titles and pre-processed; 75% of the data were used for training and the remainder for testing. Different machine learning techniques, such as classification and clustering, were applied to this large collection of textual data using the Waikato Environment for Knowledge Analysis (WEKA), and their performance metrics and error rates were compared. The most suitable machine learning technique and mode of testing for the textual data were then selected and applied to train models as the solution to the classification problem of the electronic resources.
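    The workflow above (label abstracts, split 75%/25% into training and test sets, fit a text classifier, compare predictions) can be sketched in a few lines. This is a minimal standard-library sketch using a bag-of-words Naive Bayes classifier; the toy corpus and category labels are invented for illustration — the actual project used WEKA on 300+ real abstracts:

    ```python
    # Toy version of the e-library classification pipeline: 75/25 train/test
    # split plus a bag-of-words Naive Bayes classifier with Laplace smoothing.
    from collections import Counter, defaultdict
    import math

    def tokenize(text):
        return text.lower().split()

    def train_nb(docs):
        """docs: list of (text, label). Returns per-label word counts and
        per-label document counts (used as priors)."""
        word_counts = defaultdict(Counter)
        label_counts = Counter()
        for text, label in docs:
            label_counts[label] += 1
            word_counts[label].update(tokenize(text))
        return word_counts, label_counts

    def classify(text, word_counts, label_counts):
        vocab = {w for counts in word_counts.values() for w in counts}
        total_docs = sum(label_counts.values())
        best, best_lp = None, float("-inf")
        for label, doc_count in label_counts.items():
            lp = math.log(doc_count / total_docs)          # class prior
            total_words = sum(word_counts[label].values())
            for w in tokenize(text):
                # Laplace smoothing over the shared vocabulary.
                lp += math.log((word_counts[label][w] + 1)
                               / (total_words + len(vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

    # Hypothetical labelled abstracts standing in for the real resource pool.
    corpus = [
        ("neural networks deep learning models", "computing"),
        ("machine learning classification algorithms", "computing"),
        ("protein structure molecular biology", "biology"),
        ("gene expression cell biology", "biology"),
    ]
    split = int(len(corpus) * 0.75)          # 75% train, 25% test
    train, test = corpus[:split], corpus[split:]
    wc, lc = train_nb(train)
    for text, true_label in test:
        print(text, "->", classify(text, wc, lc))
    ```

    In the real project the comparison across classifiers and test modes (cross-validation vs. percentage split) was done in WEKA; this sketch only shows the shape of one such classifier and the percentage-split evaluation.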